Black-Box Adversarial Attack on Time Series Classification

Abstract

With the increasing use of deep neural networks (DNNs) in time series classification (TSC), recent work reveals the threat of adversarial attack, where the adversary can construct adversarial examples to cause model mistakes. However, existing research on attacking TSC typically adopts an unrealistic white-box setting, with model details transparent to the adversary. In this work, we study a more rigorous black-box setting with attack detection applied, which restricts gradient access and requires the adversarial example to also be stealthy. Theoretical analyses reveal that the key lies in: estimating black-box gradients with the diversity and non-convexity of TSC models resolved, and restricting the l0 norm of the perturbation to construct stealthy adversarial samples. Towards this end, we propose a new framework named BlackTreeS, which solves the hard optimization issue of adversarial example construction with two simple yet effective modules. In particular, it uses a tree search strategy to find influential positions in a sequence, and independently estimates the black-box gradients at these positions. Extensive experiments on three real-world TSC datasets against five DNN-based models validate the effectiveness of BlackTreeS, e.g., it improves the attack success rate from 19.3% to 27.3%, and decreases the detection success rate from 90.9% to 6.8% for LSTM on the UWave dataset.
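To make the two modules concrete, here is a minimal query-only sketch in the spirit of the abstract, assuming a model `f` that returns class probabilities. The greedy position scoring below stands in for the paper's tree search, and coordinate-wise finite differences stand in for its gradient estimator; all names and defaults are illustrative, not the authors' implementation.

```python
import numpy as np

def attack_l0(f, x, y, k=5, sigma=0.1, step=0.5, iters=50):
    """l0-restricted black-box attack sketch: pick a few influential
    time steps, then estimate gradients only at those positions via
    coordinate-wise finite differences.

    f : query-only model, f(x) -> vector of class probabilities
    x : 1-D float time series (np.ndarray); y : true label (int)
    k : l0 budget -- number of positions allowed to change
    """
    x_adv = x.astype(float)
    base = f(x_adv)[y]

    # Stand-in for the paper's tree search: score each position by how
    # much a small bump there lowers the true-class probability.
    scores = np.empty(len(x))
    for i in range(len(x)):
        bumped = x_adv.copy()
        bumped[i] += sigma
        scores[i] = base - f(bumped)[y]
    positions = np.argsort(scores)[-k:]        # keep the k most influential

    for _ in range(iters):
        # Finite-difference gradient of the true-class probability,
        # estimated independently at each chosen position.
        grad = np.zeros(len(x))
        for i in positions:
            e = np.zeros(len(x)); e[i] = sigma
            grad[i] = (f(x_adv + e)[y] - f(x_adv - e)[y]) / (2 * sigma)
        x_adv[positions] -= step * np.sign(grad[positions])
        if f(x_adv).argmax() != y:             # success: label flipped
            break
    return x_adv
```

Because only the k selected positions are ever modified, the perturbation's l0 norm is bounded by k throughout, which is the stealthiness constraint the abstract emphasizes.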


Similar Articles

Query-Efficient Black-box Adversarial Examples

Current neural network-based image classifiers are susceptible to adversarial examples, even in the black-box setting, where the attacker is limited to query access without access to gradients. Previous methods — substitute networks and coordinate-based finite-difference methods — are either unreliable or query-inefficient, making these methods impractical for certain problems. We introduce a n...
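One query-efficient alternative to coordinate-wise finite differences, popularized by this line of work, is a natural-evolution-strategies (NES) style estimator that averages loss queries over random antithetic directions. A minimal sketch, with illustrative parameter names:

```python
import numpy as np

def nes_gradient(loss, x, n=50, sigma=0.001, rng=None):
    """Antithetic NES estimate of d loss / d x using only loss queries.

    loss : query-only scalar function of x (no gradient access)
    n    : number of noise pairs (2 * n queries total)
    Illustrative sketch; defaults are not taken from the paper.
    """
    rng = rng or np.random.default_rng(0)
    grad = np.zeros_like(x, dtype=float)
    for _ in range(n):
        u = rng.standard_normal(x.shape)
        # Antithetic pair: query at x + sigma*u and x - sigma*u.
        grad += (loss(x + sigma * u) - loss(x - sigma * u)) * u
    return grad / (2 * n * sigma)
```

Each call costs 2*n queries regardless of the input dimension, which is what makes it far more query-efficient than one finite-difference pair per coordinate on high-dimensional inputs.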


Structured Black Box Variational Inference for Latent Time Series Models

Continuous latent time series models are prevalent in Bayesian modeling; examples include the Kalman filter, dynamic collaborative filtering, or dynamic topic models. These models often benefit from structured, non mean field variational approximations that capture correlations between time steps. Black box variational inference with reparameterization gradients (BBVI) allows us to explore a ri...
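A minimal sketch of the reparameterization-gradient ELBO such methods optimize, using a full-covariance Gaussian over the latent path so correlations between time steps are captured (non-mean-field). The toy random-walk model and all names are illustrative assumptions, not the paper's setup:

```python
import torch

def chol_factor(L_raw):
    # Valid Cholesky factor: strict lower triangle + positive diagonal.
    return torch.tril(L_raw, -1) + torch.diag(
        torch.nn.functional.softplus(torch.diagonal(L_raw)))

def elbo(log_joint, mu, L_raw, n_samples=10):
    """Reparameterized ELBO with q(z) = N(mu, L L^T); the full-covariance
    scale lets q couple adjacent time steps, unlike mean-field."""
    q = torch.distributions.MultivariateNormal(mu, scale_tril=chol_factor(L_raw))
    z = q.rsample((n_samples,))        # gradients flow through the samples
    return (log_joint(z) - q.log_prob(z)).mean()

# Toy model: smooth random-walk prior over a latent path of length T.
T = 20
def log_joint(z):
    return (-0.5 * ((z[..., 1:] - z[..., :-1]) ** 2).sum(-1)
            - 0.5 * (z ** 2).sum(-1))

mu = torch.zeros(T, requires_grad=True)
L_raw = torch.eye(T).requires_grad_(True)
opt = torch.optim.Adam([mu, L_raw], lr=0.01)
for _ in range(200):
    opt.zero_grad()
    (-elbo(log_joint, mu, L_raw)).backward()   # ascend the ELBO
    opt.step()
```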


Simple Black-Box Adversarial Perturbations for Deep Networks

Deep neural networks are powerful and popular learning models that achieve state-of-the-art pattern recognition performance on many computer vision, speech, and language processing tasks. However, these networks have also been shown susceptible to carefully crafted adversarial perturbations which force misclassification of the inputs. Adversarial examples enable adversaries to subvert the expec...
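A hedged sketch of one such simple perturbation strategy: repeatedly modify a few randomly chosen coordinates and keep any change that lowers the true-class probability, querying only model outputs. The paper's exact local-search rule may differ; names and defaults here are illustrative:

```python
import numpy as np

def random_coord_attack(f, x, y, trials=200, eps=0.5, n_pix=5, rng=None):
    """Greedy black-box local search: perturb n_pix random coordinates
    per trial and keep the change only if it helps.

    f : query-only model, f(x) -> vector of class probabilities
    """
    rng = rng or np.random.default_rng(0)
    x_adv = x.astype(float)
    best = f(x_adv)[y]
    for _ in range(trials):
        cand = x_adv.copy()
        idx = rng.choice(cand.size, size=n_pix, replace=False)
        cand.flat[idx] += rng.choice([-eps, eps], size=n_pix)
        p = f(cand)[y]
        if p < best:                       # greedy: keep improving moves
            x_adv, best = cand, p
        if f(x_adv).argmax() != y:
            return x_adv                   # misclassification achieved
    return x_adv
```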


Blocking Transferability of Adversarial Examples in Black-Box Learning Systems

Advances in Machine Learning (ML) have led to its adoption as an integral component in many applications, including banking, medical diagnosis, and driverless cars. To further broaden the use of ML models, cloud-based services offered by Microsoft, Amazon, Google, and others have developed ML-as-a-service tools as black-box systems. However, ML classifiers are vulnerable to adversarial examples...


Delving into Transferable Adversarial Examples and Black-box Attacks

An intriguing property of deep neural networks is the existence of adversarial examples, which can transfer among different architectures. These transferable adversarial examples may severely hinder deep neural network-based applications. Previous works mostly study the transferability using small scale datasets. In this work, we are the first to conduct an extensive study of the transferabilit...



Journal

Journal Title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2023

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v37i6.25896